 human target


Robot Guided Evacuation with Viewpoint Constraints

Gong Chen, Malika Meghjani, Marcel Bartholomeus Prasetyo

arXiv.org Artificial Intelligence

We present a viewpoint-based non-linear Model Predictive Control (MPC) for evacuation guiding robots. Specifically, the proposed MPC algorithm enables evacuation guiding robots to track and guide cooperative human targets in emergency scenarios. Our algorithm accounts for the environment layout as well as the distance between the robot and the human target and the distance to the goal location. A key challenge for an evacuation guiding robot is the trade-off between planning motion that leads the target toward a goal position and staying in the target's viewpoint while maintaining line-of-sight for guiding. We illustrate the effectiveness of our proposed evacuation guiding algorithm in both simulated and real-world environments with an Unmanned Aerial Vehicle (UAV) guiding a human. Our results suggest that using contextual information from the environment for motion planning increases the visibility of the guiding UAV to the human while achieving a faster total evacuation time.
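The trade-off the abstract describes can be illustrated with a minimal one-step cost function. This is a hypothetical sketch, not the paper's method: the weights, field-of-view angle, preferred separation, and the greedy candidate search below are all illustrative assumptions, whereas the actual work uses a non-linear MPC over a horizon with environment-layout constraints.

```python
import math

# Hypothetical weights and field-of-view half-angle; the paper's actual
# cost terms and tuning are not given in this summary.
W_GOAL, W_DIST, W_VIEW = 1.0, 0.5, 2.0
FOV_HALF_ANGLE = math.radians(60)
PREFERRED_DIST = 2.0  # desired robot-human separation (m)

def step_cost(robot, human, human_heading, goal):
    """One-step cost balancing progress to goal, separation, and visibility."""
    goal_term = math.dist(robot, goal)                         # lead toward goal
    dist_term = abs(math.dist(robot, human) - PREFERRED_DIST)  # keep guiding range
    # Viewpoint term: angle between the human's heading and the bearing
    # from the human to the robot; penalize leaving the human's field of view.
    bearing = math.atan2(robot[1] - human[1], robot[0] - human[0])
    angle = abs((bearing - human_heading + math.pi) % (2 * math.pi) - math.pi)
    view_term = max(0.0, angle - FOV_HALF_ANGLE)
    return W_GOAL * goal_term + W_DIST * dist_term + W_VIEW * view_term

def best_move(robot, human, human_heading, goal, speed=0.5, n=16):
    """Greedy one-step stand-in for MPC: pick the lowest-cost candidate motion."""
    candidates = [(robot[0] + speed * math.cos(2 * math.pi * k / n),
                   robot[1] + speed * math.sin(2 * math.pi * k / n))
                  for k in range(n)]
    return min(candidates, key=lambda p: step_cost(p, human, human_heading, goal))
```

With the human facing the robot, the chosen move heads toward the goal; if a candidate would exit the human's field of view, the visibility penalty steers the robot back, which is the tension between guiding progress and staying visible that the paper addresses.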


Inside Israel's Bombing Campaign in Gaza

The New Yorker

Since the war began in Gaza, more than six months ago, the Israeli magazine +972 has published some of the most penetrating reporting on the Israel Defense Forces' conduct. In November, +972, along with the Hebrew publication Local Call, found that the I.D.F. had expanded the number of "legitimate" military targets, leading to a huge increase in civilian casualties. Then, earlier this month, +972 and Local Call released a long feature called "Lavender: The AI Machine Directing Israel's Bombing Spree in Gaza." The story revealed how the Israeli military had used the program to identify suspected militants, which in practice meant that tens of thousands of Palestinians had their homes marked as legitimate targets for bombing, with minimal human oversight. The I.D.F. also said that, according to its rules, "analysts must conduct independent examinations" to verify the identification of targets.


'The machine did it coldly': Israel used AI to identify 37,000 Hamas targets

The Guardian

The Israeli military's bombing campaign in Gaza used a previously undisclosed AI-powered database that at one stage identified 37,000 potential targets based on their apparent links to Hamas, according to intelligence sources involved in the war. In addition to talking about their use of the AI system, called Lavender, the intelligence sources claim that Israeli military officials permitted large numbers of Palestinian civilians to be killed, particularly during the early weeks and months of the conflict. Their unusually candid testimony provides a rare glimpse into the first-hand experiences of Israeli intelligence officials who have been using machine-learning systems to help identify targets during the six-month war. Israel's use of powerful AI systems in its war on Hamas has entered uncharted territory for advanced warfare, raising a host of legal and moral questions, and transforming the relationship between military personnel and machines. "This is unparalleled, in my memory," said one intelligence officer who used Lavender, adding that they had more faith in a "statistical mechanism" than a grieving soldier.


Killer Flying Robots Are Here. What Do We Do Now?

#artificialintelligence

In the popular Terminator movies, a relentless super-robot played by Arnold Schwarzenegger tracks and attempts to kill human targets. It was pure science fiction in the 1980s. Today, killer robots hunting down targets have not only become reality, but are sold and deployed on the field of battle. The new Turkish-made Kargu-2 quadcopter drone can allegedly autonomously track and kill human targets on the basis of facial recognition and artificial intelligence--a big technological leap from the drone fleets requiring remote control by human operators. A United Nations Security Council report claims the Kargu-2 was used in Libya to mount autonomous attacks on human targets.


How Artificial Intelligence Threatens World Peace

#artificialintelligence

If you follow my blogs, you know that I've been focusing a fair amount of attention on artificial intelligence, and how it has raised reasons for both optimism and extreme ethical pause. In this one, I want to discuss the potential for a new conflict not dissimilar to the Cold War that accompanied the development and proliferation of nuclear energy; but this time AI will take centre stage of the theatre. Very much akin to nuclear expansion, artificial intelligence comes with its own bag of pros and cons. Indubitably, nuclear energy has been harnessed for the common good of mankind. Water Desalination -- Reducing the saline content of seawater is extremely costly and inefficient.


Killer Drone Autonomously 'Hunted Down' a Human Target, UN Experts Say

#artificialintelligence

A "lethal" weaponized drone "hunted down" and "remotely engaged" human targets without its handlers' say-so during a conflict in Libya last year, according to a United Nations report first covered by New Scientist this week. Whether there were any casualties remains unclear, but if confirmed, it would likely be the first recorded death carried out by an autonomous killer robot. In March 2020, a Kargu-2 attack quadcopter, which the agency called a "lethal autonomous weapon system," targeted retreating soldiers and convoys led by the Libyan National Army's Khalifa Haftar during a civil conflict with Libyan government forces. "The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability," the UN Security Council's Panel of Experts on Libya wrote in the report. It remains unconfirmed whether any soldiers were killed in the attack, although the UN experts imply as much.


Autonomous 'killer drone' hunted down human targets

#artificialintelligence

… or moving targets through its indigenous and real-time image processing capabilities and machine learning algorithms embedded on the platform.


A rogue killer drone 'hunted down' a human target without being instructed to, UN report says

#artificialintelligence

A "lethal" weaponized drone "hunted down a human target" without being told to for the first time, according to a UN report seen by the New Scientist. The March 2020 incident saw a KARGU-2 quadcopter autonomously attack a human during a conflict between Libyan government forces and a breakaway military faction, led by the Libyan National Army's Khalifa Haftar, the Daily Star reported. The Turkish-built KARGU-2, a deadly attack drone designed for asymmetric warfare and anti-terrorist operations, targeted one of Haftar's soldiers while he tried to retreat, according to the paper. The drone, which can be directed to detonate on impact, was operating in a "highly effective" autonomous mode that required no human controller, the New York Post said. "The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability," the report from the UN Security Council's Panel of Experts on Libya said.


Killer drone 'hunted down a human target' without being told to

FOX News

Fox News Flash top headlines are here. Check out what's clicking on Foxnews.com. Arnold Schwarzenegger could've seen this one coming. After a United Nations commission to block killer robots was shut down in 2018, a new report from the international body says the Terminator-like drones are now here. Last year, "an autonomous weaponized drone hunted down a human target" and attacked them without being specifically ordered to, according to a report from the UN Security Council's Panel of Experts on Libya, published in March 2021 and covered by New Scientist magazine and the Star.


The persistent humanity in AI and cybersecurity

#artificialintelligence

Even as AI technology transforms some aspects of cybersecurity, the intersection of the two remains profoundly human. Although it's perhaps counterintuitive, humans are front and center in all parts of the cybersecurity triad: the bad actors who seek to do harm, the gullible soft targets, and the good actors who fight back. Even without the looming specter of AI, the cybersecurity battlefield is often opaque to average users and the technologically savvy alike. Adding a layer of AI, which comprises numerous technologies that can also feel unexplainable to most people, may seem doubly intractable -- as well as impersonal. That's because although the cybersecurity fight is sometimes deeply personal, it's rarely waged in person.